We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets - e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than best-performing dialogue models - e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred to the original human-written gold responses. We make our data, models, and code public.
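To make the distillation step above concrete, here is a minimal, hypothetical sketch of how a social-commonsense triple might be contextualized into a prompt for an instruction-following LLM. The triple, the template wording, and the `query_llm` stub are illustrative assumptions, not the authors' released pipeline.

```python
# Hypothetical sketch: turn an Atomic10x-style triple (head event, relation, tail
# inference) into a short narrative and a dialogue-generation prompt.

def triple_to_prompt(head: str, relation: str, tail: str) -> str:
    """Rewrite a commonsense triple into a narrative seed plus a dialogue request."""
    templates = {
        "xWant": f"{head}. As a result, they want {tail}.",
        "xEffect": f"{head}. As a result, {tail}.",
    }
    narrative = templates.get(relation, f"{head}. {tail}.")
    return (
        f"The following is a short story:\n{narrative}\n\n"
        "Write a natural multi-turn conversation between the people involved."
    )

def query_llm(prompt: str) -> str:
    # Placeholder: call an instruction-tuned LLM of your choice here.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = triple_to_prompt("PersonX moves to a new city", "xWant", "to make new friends")
    print(prompt)  # feed this to the LLM, then post-filter the returned dialogue
```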
Partial Label (PL) learning refers to the task of learning from partially labeled data, where each training instance is ambiguously equipped with a set of candidate labels, only one of which is valid. Recent advances in the deep PL learning literature have shown that deep learning paradigms, e.g., self-training, contrastive learning, or class activation values, can achieve promising performance. Inspired by the impressive success of deep Semi-Supervised (SS) learning, we transform the PL learning problem into an SS learning problem and propose a novel PL learning method, namely Partial Label learning with Semi-supervised Perspective (PLSP). Specifically, we first form the pseudo-labeled dataset by selecting a small number of reliable pseudo-labeled instances with high-confidence prediction scores and treating the remaining instances as pseudo-unlabeled ones. We then design an SS learning objective, consisting of a supervised loss for pseudo-labeled instances and a semantic consistency regularization for pseudo-unlabeled instances. We further introduce a complementary regularization for non-candidate labels to constrain the model predictions on them to be as small as possible. Empirical results demonstrate that PLSP significantly outperforms existing PL baseline methods, especially at high ambiguity levels. Code is available at: https://github.com/changchunli/PLSP.
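As a rough illustration of the objective described above, the following PyTorch-style sketch combines the three terms: a supervised loss on confidently pseudo-labeled instances, a consistency term between two augmented views of pseudo-unlabeled instances, and a complementary term on non-candidate labels. The loss weights and tensor shapes are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def plsp_style_loss(logits_lab, pseudo_labels,      # pseudo-labeled batch
                    logits_weak, logits_strong,      # two views of pseudo-unlabeled batch
                    logits_all, candidate_mask,      # full batch + 0/1 candidate-label mask
                    lam_cons=1.0, lam_comp=1.0):
    # (1) supervised loss on reliable pseudo-labeled instances
    sup = F.cross_entropy(logits_lab, pseudo_labels)

    # (2) semantic consistency: strong view should match the (detached) weak-view prediction
    with torch.no_grad():
        target = F.softmax(logits_weak, dim=-1)
    cons = F.kl_div(F.log_softmax(logits_strong, dim=-1), target, reduction="batchmean")

    # (3) complementary regularization: probabilities of non-candidate labels should be small
    probs = F.softmax(logits_all, dim=-1)
    comp = (probs * (1 - candidate_mask)).sum(dim=-1).mean()

    return sup + lam_cons * cons + lam_comp * comp
```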
We regard learning and compositionality as the key mechanisms for simulating human-like intelligence. While each mechanism has been successfully achieved by neural networks and symbolic AI, respectively, it is the combination of the two that makes human-like intelligence possible. Despite the many attempts to build hybrid neuro-symbolic systems, we argue that our true goal should be to unify learning and compositionality, the core mechanisms, rather than neural and symbolic methods, the surface approaches used to achieve them. In this work, we review and analyze the strengths and weaknesses of neural and symbolic methods by separating their form and meaning (structure and semantics), and propose Connectionist Probabilistic Programs (CPPs), a framework that connects connectionist structures (for learning) with probabilistic program semantics (for compositionality). Under this framework, we design a CPP extension for small-scale sequence modeling and provide a learning algorithm based on Bayesian inference. Although challenges remain in learning complex patterns without supervision, our early results show that CPPs successfully extract concepts and relations from raw sequential data, a first step toward compositional learning.
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task that focuses on detecting the sentiment polarity toward a given aspect in a sentence. However, it is always sensitive to the multi-aspect challenge, where the features of multiple aspects in one sentence affect each other. To mitigate this issue, we design a novel training framework called Contrastive Cross-Channel Data Augmentation (C3DA), which leverages an in-domain generator to construct additional multi-aspect samples and then boosts the robustness of ABSA models via contrastive learning on these generated data. In practice, given a generative pre-trained language model and limited labeled ABSA data, we first apply parameter-efficient methods to perform in-domain fine-tuning. The obtained in-domain generator is then used to generate synthetic sentences from two channels, an aspect augmentation channel and a polarity augmentation channel, which condition sentence generation on a given aspect and a given polarity, respectively. Specifically, C3DA performs sentence generation in a cross-channel manner to obtain more sentences, and proposes an entropy-minimization filter to remove low-quality generated samples. Extensive experiments show that our C3DA outperforms baselines without any augmentation by about 1% in accuracy and Macro-F1. Code and data are released at https://github.com/wangbing1416/c3da.
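The entropy-minimization filter can be illustrated with a small sketch: a generated sentence is kept only when the current ABSA model assigns it a low-entropy (i.e., confident) polarity distribution. The threshold value and the shape of the probability vectors below are assumptions, not the paper's exact settings.

```python
import math

def predictive_entropy(polarity_probs):
    """Shannon entropy of a probability distribution over sentiment polarities."""
    return -sum(p * math.log(p + 1e-12) for p in polarity_probs)

def entropy_filter(generated, threshold=0.5):
    """Keep (sentence, probs) pairs whose predictive entropy is below the threshold."""
    return [s for s, probs in generated if predictive_entropy(probs) < threshold]

if __name__ == "__main__":
    samples = [
        ("The battery life is great but the screen is dim.", [0.9, 0.05, 0.05]),
        ("The battery screen life dim great is.", [0.4, 0.3, 0.3]),
    ]
    print(entropy_filter(samples))  # only the confidently-scored sentence survives
```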
The Wasserstein barycenter, built on the theory of optimal transport, provides a powerful framework for aggregating probability distributions and has attracted increasing attention within the machine learning community. However, it suffers from a severe computational burden, especially in high-dimensional and continuous settings. To this end, we develop a novel continuous approximation method for the Wasserstein barycenter problem given sample access to the input distributions. The basic idea is to introduce a variational distribution as an approximation of the true continuous barycenter, so as to frame the barycenter computation as an optimization problem in which the parameters of the variational distribution adjust the proxy distribution to be similar to the barycenter. Leveraging the variational distribution, we construct a tractable dual formulation for the regularized Wasserstein barycenter problem with c-cyclical monotonicity, which can be efficiently solved by stochastic optimization. We provide a theoretical analysis of convergence and demonstrate the practical effectiveness of our method on a real application to subset posterior aggregation and on synthetic data.
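For reference, a brief sketch of the objective being approximated (standard notation, not necessarily the paper's): given input distributions \mu_1, ..., \mu_K with weights \lambda_k, the barycenter minimizes the weighted sum of squared Wasserstein distances, and the variational approach restricts the search to a parametric family \nu_\theta:

```latex
\begin{align}
  \bar{\mu} &= \operatorname*{arg\,min}_{\mu} \sum_{k=1}^{K} \lambda_k \, W_2^2(\mu, \mu_k),
  \qquad \lambda_k \ge 0,\; \textstyle\sum_{k} \lambda_k = 1, \\
  \theta^{\star} &= \operatorname*{arg\,min}_{\theta} \sum_{k=1}^{K} \lambda_k \, W_2^2(\nu_\theta, \mu_k)
  \quad \text{(optimize the parameters of the proxy distribution } \nu_\theta\text{)}.
\end{align}
```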
Context is vital for commonsense moral reasoning. "Lying to a friend" is wrong if it is meant to deceive them, but may be morally okay if it is intended to protect them. Such nuanced but salient contextual information can potentially flip the moral judgment of an action. Thus, we present ClarifyDelphi, an interactive system that elicits missing context for a moral situation by generating clarification questions such as "Why did you lie to your friend?". Our approach is inspired by the observation that questions whose potential answers lead to diverging moral judgments are the most informative. We learn to generate questions using reinforcement learning, by maximizing the divergence between the moral judgments of hypothetical answers to a question. Human evaluation shows that our system generates more relevant, informative, and defeasible questions than other question generation baselines. ClarifyDelphi assists informed moral reasoning by seeking additional morally consequential context to disambiguate social and moral situations.
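A simplified sketch of the reward idea: a clarification question is worth asking when hypothetical answers to it pull the moral judgment in different directions. Below, judgments are distributions over three illustrative classes (good / okay / bad) and the reward is the Jensen-Shannon divergence between two answers' judgments; the label space and the two-answer setup are simplifying assumptions rather than the paper's exact formulation.

```python
import math

def _kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def question_reward(judgment_a, judgment_b):
    """Higher when two hypothetical answers lead to diverging moral judgments."""
    return js_divergence(judgment_a, judgment_b)

if __name__ == "__main__":
    # "Why did you lie to your friend?" -> "to deceive them" vs. "to protect them"
    deceive = [0.05, 0.10, 0.85]   # mostly judged bad
    protect = [0.60, 0.30, 0.10]   # mostly judged good/okay
    print(question_reward(deceive, protect))
```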
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2) ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask is whether it is possible at all to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on extreme-scale models as the teacher through two innovations: (1) a novel adaptation of NeuroLogic Decoding to enhance the generation quality of weak, off-the-shelf language models, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is the largest and of the highest quality available to date.
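A high-level sketch of the self-imitation loop (all names below are placeholders, not the authors' API): the model generates candidate generics, a critic keeps only the high-quality ones, and the model is then fine-tuned on its own filtered generations.

```python
# Hypothetical sketch of iterative self-imitation over generated generics.

def self_imitation(model, concepts, critic, n_rounds=3, threshold=0.9):
    for _ in range(n_rounds):
        # 1. generate candidate generic statements (e.g., with constrained decoding)
        candidates = [g for c in concepts for g in model.generate_generics(c)]
        # 2. keep only generations the critic scores as high-quality / true
        accepted = [g for g in candidates if critic.score(g) >= threshold]
        # 3. fine-tune the model on its own accepted outputs and repeat
        model.finetune(accepted)
    return model
```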
Point-of-Care Ultrasound (POCUS) refers to ultrasonography performed and interpreted by the clinician at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that aid medical professionals in diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty of acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results on two lung ultrasound datasets and demonstrate that our model achieves performance on par with SMEs in pneumothorax identification. We also developed an iOS application that runs our full system in under 4 seconds on an iPad Pro and under 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
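Schematically, the pipeline could look like the sketch below, with every component hidden behind a placeholder: region extraction on the ultrasound video (the paper uses YOLOv4), 3D sparse-coding features over the extracted regions, and a small classifier on top. The function and attribute names are illustrative, not the authors' implementation.

```python
def diagnose_pneumothorax(video_frames, detector, sparse_coder, classifier):
    # 1. localize the relevant lung region in each frame
    rois = [detector.detect_lung_region(frame) for frame in video_frames]
    # 2. encode the stack of regions with a 3D sparse-coding model
    codes = sparse_coder.encode(rois)
    # 3. classify pneumothorax vs. no pneumothorax from the sparse codes
    return classifier.predict(codes)
```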
Given partial objects and some complete ones as references, point cloud completion aims to recover authentic shapes. However, existing methods pay little attention to general shapes, which leads to poor authenticity of the completion results. Besides, missing patterns are diverse in reality, but existing methods can only handle fixed ones, which implies poor generalization ability. Considering that a partial point cloud is a subset of the corresponding complete one, we regard them as different samples drawn from the same distribution and propose the Structure Retrieval based Point Completion Network (SRPCN). It first uses k-means clustering to extract structure points and disperses them into distributions, and then uses KL divergence as a metric to find the complete structure point cloud that best matches the input in a database. Finally, a PCN-like decoder network generates the final results based on the retrieved structure point clouds. Since structure plays an important role in describing the general shape of an object and the proposed structure retrieval method is robust to missing patterns, experiments show that our method generates more authentic results and has stronger generalization ability.
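A simplified sketch of the retrieval step: k-means extracts structure points from the partial input, each structure point set is converted to a distribution (here, a normalized voxel-occupancy histogram, a simplification of the paper's formulation), and KL divergence picks the best-matching complete structure point cloud from a database.

```python
import numpy as np
from sklearn.cluster import KMeans

def structure_points(points, k=16):
    """Cluster an (N, 3) point cloud and return the k cluster centroids."""
    return KMeans(n_clusters=k, n_init=10).fit(points).cluster_centers_

def to_distribution(points, bins=8, lo=-1.0, hi=1.0, eps=1e-6):
    """Normalized voxel-occupancy histogram of an (N, 3) point set."""
    hist, _ = np.histogramdd(points, bins=bins, range=[(lo, hi)] * 3)
    hist = hist.ravel() + eps
    return hist / hist.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def retrieve(partial_points, database):
    """Return the complete structure point cloud in `database` closest to the input."""
    p = to_distribution(structure_points(partial_points))
    return min(database, key=lambda complete: kl_divergence(p, to_distribution(complete)))
```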
As humans, we navigate the world through all of our senses, using each one to correct the others. We introduce MERLOT Reserve, a model that jointly represents videos over time, through a new training objective that learns from audio, subtitles, and video frames. Given a video, we replace snippets of text and audio with a MASK token; the model learns by choosing the correct masked-out snippet. Our objective learns faster than alternatives and performs well at scale: we pretrain on 20 million YouTube videos. Empirical results show that MERLOT Reserve learns strong representations of videos across all constituent modalities. When fine-tuned, it sets a new state of the art on VCR and TVQA, outperforming prior work by 5% and 7%, respectively. Ablations show that both tasks benefit from audio pretraining, even VCR, a QA task centered around images (without sound). Moreover, our objective enables out-of-the-box prediction, revealing strong multimodal commonsense understanding. In a fully zero-shot setting, our model obtains competitive results on four video understanding tasks, even outperforming supervised approaches on the recently proposed Situated Reasoning (STAR) benchmark. We analyze why including audio leads to better vision-language representations, suggesting significant opportunities for future research. We conclude by discussing the ethical and societal implications of multimodal pretraining.
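A rough sketch of the "choose the correct masked snippet" objective: the representation at each MASK position is scored against candidate snippet encodings (the true snippet plus in-batch negatives), and cross-entropy encourages selecting the right one. The temperature, normalization, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_snippet_contrastive_loss(mask_reprs, snippet_reprs, temperature=0.05):
    """
    mask_reprs:    (B, D) representations of the MASKed positions
    snippet_reprs: (B, D) encodings of the true snippets, aligned with mask_reprs;
                   other rows in the batch act as negatives.
    """
    mask_reprs = F.normalize(mask_reprs, dim=-1)
    snippet_reprs = F.normalize(snippet_reprs, dim=-1)
    logits = mask_reprs @ snippet_reprs.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```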